This manuscript addresses the joint problems of predicting all-cause inpatient readmission or death after discharge and quantifying the impact of discharge placement on preventing these adverse events. To this end, we developed an inherently interpretable multilevel Bayesian modeling framework inspired by the piecewise linearity of ReLU-activated deep neural networks. In a survival model, we explicitly adjust for confounding in order to quantify local average treatment effects of discharge placement interventions. We trained the model on a 5% sample of Medicare beneficiaries from 2008 and 2011, and then tested it on 2012 claims. Evaluated on classification accuracy for 30-day all-cause unplanned readmission (defined using the official CMS methodology), the model performed similarly to XGBoost, logistic regression (with feature engineering), and a Bayesian deep neural network trained on the same data. Tested on held-out future data for the 30-day classification task, the model achieved an AUROC of approximately 0.76 and an AUPRC of approximately 0.50 (relative to the overall positive rate in the test data), demonstrating that one need not sacrifice interpretability for accuracy. In addition, the model achieved a test AUROC of 0.78 for classifying 90-day all-cause unplanned readmission or death. We readily peer inside our inherently interpretable model and summarize its key findings. Furthermore, we demonstrate how the black-box post-hoc explainer tool SHAP can generate explanations that are not supported by the fitted model and that, taken at face value, do not provide enough context to make the model actionable.
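For context on why the abstract reports AUPRC relative to the overall positive rate: the positive rate is the expected AUPRC of an uninformative classifier, so it serves as the natural baseline. Below is a minimal sketch of computing both metrics with scikit-learn on synthetic, hypothetical data (not the study's Medicare claims).

```python
# Minimal sketch (not the authors' code): comparing a model's AUPRC to the baseline
# set by the overall positive rate. Labels and scores are synthetic and illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.18, size=5000)  # synthetic labels, arbitrary 18% positive rate
y_score = np.clip(0.18 + 0.3 * y_true + rng.normal(0, 0.2, size=5000), 0, 1)  # synthetic risks

auroc = roc_auc_score(y_true, y_score)
auprc = average_precision_score(y_true, y_score)  # AUPRC via average precision
baseline = y_true.mean()                          # expected AUPRC of a random classifier

print(f"AUROC={auroc:.2f}  AUPRC={auprc:.2f}  positive rate (AUPRC baseline)={baseline:.2f}")
```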
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
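As a rough illustration of two of the surveyed practices (K-fold cross-validation on the training set combined with ensembling of the resulting fold models), here is a minimal sketch; the estimator and data are placeholders, not tied to any specific challenge submission.

```python
# Sketch: K-fold training with an ensemble of per-fold models (placeholder data and model).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_test = X[:100]  # stand-in test set

fold_models = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
    fold_models.append(model)

# Ensemble by averaging the per-fold predicted probabilities.
ensemble_prob = np.mean([m.predict_proba(X_test)[:, 1] for m in fold_models], axis=0)
```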
Objective: We aim to develop an open-source natural language processing (NLP) package, SODA (i.e., SOcial DeterminAnts), with pre-trained transformer models to extract social determinants of health (SDoH) for cancer patients, examine the generalizability of SODA to a new disease domain (i.e., opioid use), and evaluate the extraction rate of SDoH using cancer populations. Methods: We identified SDoH categories and attributes and developed an SDoH corpus using clinical notes from a general cancer cohort. We compared four transformer-based NLP models for extracting SDoH, examined the generalizability of the NLP models to a cohort of patients prescribed opioids, and explored customization strategies to improve performance. We applied the best NLP model to extract 19 categories of SDoH from the breast (n=7,971), lung (n=11,804), and colorectal cancer (n=6,240) cohorts. Results and Conclusion: We developed a corpus of 629 cancer patients' notes annotated with 13,193 SDoH concepts/attributes from 19 categories of SDoH. The Bidirectional Encoder Representations from Transformers (BERT) model achieved the best strict/lenient F1 scores of 0.9216 and 0.9441 for SDoH concept extraction, and 0.9617 and 0.9626 for linking attributes to SDoH concepts. Fine-tuning the NLP models using new annotations from opioid use patients improved the strict/lenient F1 scores from 0.8172/0.8502 to 0.8312/0.8679. The extraction rates among the 19 categories of SDoH varied greatly: 10 SDoH could be extracted from >70% of cancer patients, but 9 SDoH had a low extraction rate (<70% of cancer patients). The SODA package with pre-trained transformer models is publicly available at https://github.com/uf-hobiinformatics-lab/SDoH_SODA.
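The SDoH concept extraction described above is a token-classification (clinical NER) task. As a hedged illustration of how a pre-trained transformer can be applied to such a task with the Hugging Face transformers library, here is a minimal sketch; the model name and the example note are placeholders, not the released SODA weights or data.

```python
# Sketch: token classification with a pre-trained transformer
# (placeholder public model and toy note, not the SODA package).
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "dslim/bert-base-NER"  # placeholder public NER model, not a SODA checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
note = "Patient lives alone, reports a 20 pack-year smoking history and prior alcohol use."
for entity in ner(note):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```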
This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments had a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. Our paper describes these challenges in the hope that awareness of them will benefit future applied RL work. We also describe the way we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% respectively at the two live experiment sites.
The computational complexity of the self-attention mechanism in Transformer models significantly limits their ability to generalize over long temporal durations. Memory-augmentation, or the explicit storing of past information in external memory for subsequent predictions, has become a constructive avenue for mitigating this limitation. We argue that memory-augmented Transformers can benefit substantially from considering insights from the memory literature in humans. We detail an approach for integrating evidence from the human memory system through the specification of cross-domain linking hypotheses. We then provide an empirical demonstration to evaluate the use of surprisal as a linking hypothesis, and further identify the limitations of this approach to inform future research.
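For concreteness, surprisal here refers to a token's negative log-probability under the model. A minimal sketch of computing per-token surprisal with a small pre-trained causal language model follows; GPT-2 is used only as an illustrative stand-in, not as the models evaluated in the paper.

```python
# Sketch: per-token surprisal (negative log-probability) from a causal language model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "The cat sat on the mat."
ids = tokenizer(text, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits

# Surprisal of token t is -log P(token_t | tokens_<t); shift logits by one position.
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
surprisal = -log_probs.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
for tok, s in zip(tokenizer.convert_ids_to_tokens(ids[0, 1:]), surprisal[0]):
    print(f"{tok!r}: {s.item():.2f} nats")
```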
Cross-view geolocalization localizes an agent within a search area by matching images taken by a ground-view camera against overhead images taken from satellites or aircraft. Although the viewpoint difference between ground and overhead images makes cross-view geolocalization challenging, significant progress has been made under the assumption that the ground agent has a panoramic camera. For example, our prior work (WAG) introduced changes to search-area discretization, the training loss, and particle filter weighting that enabled city-scale panoramic cross-view geolocalization. However, panoramic cameras are not widely used on existing robotic platforms because of their complexity and cost. Non-panoramic cross-view geolocalization is more applicable to robotics, but it is also more challenging. This paper presents Restricted FOV Wide-Area Geolocalization (ReWAG), a cross-view geolocalization approach that generalizes WAG for use with a standard, non-panoramic ground camera by creating pose-aware embeddings and providing a strategy to incorporate particle poses into the Siamese network. ReWAG is a neural network and particle filter system that is able to globally localize a mobile agent in a GPS-denied environment with only odometry and a 90-degree FOV camera, achieving localization accuracy similar to that achieved with a panoramic camera and improving localization accuracy by a factor of 100 compared with a baseline vision transformer (ViT) approach. A video highlight demonstrating convergence over a test path of tens of kilometers is available at https://youtu.be/u_obqrt8qce.
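As a rough, hypothetical sketch of the kind of particle-filter weighting step such a system relies on (not ReWAG's actual implementation): each particle's weight is updated from the similarity between the ground-image embedding and the overhead-image embedding associated with that particle's pose.

```python
# Hypothetical sketch of particle weighting from embedding similarity (not ReWAG's code).
import numpy as np

def update_weights(weights, ground_emb, particle_embs, temperature=10.0):
    """Reweight particles by cosine similarity between the ground-view embedding
    and the overhead embedding associated with each particle's pose."""
    ground_emb = ground_emb / np.linalg.norm(ground_emb)
    particle_embs = particle_embs / np.linalg.norm(particle_embs, axis=1, keepdims=True)
    sim = particle_embs @ ground_emb              # cosine similarities per particle
    likelihood = np.exp(temperature * sim)        # sharpen similarities into pseudo-likelihoods
    new_weights = weights * likelihood
    return new_weights / new_weights.sum()

# Usage with toy data: 1000 particles, 128-dimensional embeddings.
rng = np.random.default_rng(0)
w = np.full(1000, 1 / 1000)
w = update_weights(w, rng.normal(size=128), rng.normal(size=(1000, 128)))
```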
This paper reports on the current state of underground SLAM by discussing the different SLAM strategies and results of six teams that participated in the three-year-long SubT competition. In particular, this paper has four main goals. First, we review the algorithms, architectures, and systems adopted by the teams, with particular emphasis on lidar-centric SLAM solutions (the preferred approach for almost all teams in the competition), heterogeneous multi-robot operation (including both aerial and ground robots), and real-world underground operation (from the presence of obscurants to the need to handle tight computational constraints). We do not shy away from discussing the dirty details behind the different SubT SLAM systems, which are often omitted from technical papers. Second, we discuss the maturity of the field by highlighting what is already possible with current SLAM systems and what we believe is within reach with some good systems engineering. Third, we outline what we believe are fundamental open problems that are likely to require further research to make breakthroughs. Finally, we provide a list of open-source SLAM implementations and datasets produced during the SubT challenge and related work, which constitute a useful resource for researchers and practitioners.
Iterative refinement, in which one starts from a random guess and then iteratively improves it, is a useful paradigm for representation learning because it offers a way to break symmetries between equally plausible explanations of the data. This property enables such methods to infer representations of sets of entities, such as objects in a physical scene, structurally resembling clustering algorithms in latent space. However, most prior work differentiates through the unrolled refinement process, which can make optimization challenging. We observe that such methods can be made differentiable via the implicit function theorem, and develop an implicit differentiation approach that improves the stability and tractability of training by decoupling the forward and backward passes. This connection allows us to apply advances in optimizing implicit layers not only to improve the optimization of the Slot Attention module in SLATE, a state-of-the-art method for learning entity representations, but also to do so with constant space and time complexity in backpropagation and with only one additional line of code.
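A hedged sketch of the kind of first-order implicit-differentiation trick described above: run the unrolled refinement without tracking gradients, then backpropagate through a single final refinement step at the (approximate) fixed point. The `refine` function here is a placeholder standing in for a Slot Attention iteration, not SLATE's actual code.

```python
# Sketch of first-order implicit differentiation of an iterative refinement procedure
# (placeholder `refine` standing in for a Slot Attention iteration).
import torch

def fixed_point_refinement(refine, slots, inputs, n_iters=7):
    # Run the refinement loop without tracking gradients (constant memory in n_iters).
    with torch.no_grad():
        for _ in range(n_iters):
            slots = refine(slots, inputs)
    # The extra differentiable step: one refinement applied to detached slots, which
    # first-order-approximates the implicit-function-theorem gradient at the fixed point.
    slots = refine(slots.detach(), inputs)
    return slots
```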
Language models demonstrate both quantitative improvements and new qualitative capabilities as their scale increases. Despite their potentially transformative impact, these new capabilities are still poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and mitigate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game Benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically grows with scale in settings with ambiguous context, but this can be improved with prompting.
Textual knowledge bases such as Wikipedia require considerable effort to keep up to date and consistent. While automated writing assistants could potentially ease this burden, the problem of suggesting edits grounded in external knowledge has been under-explored. In this paper, we introduce the novel generation task of *faithfully reflecting updated information in text* (FRUIT), in which the goal is to update an existing article given new evidence. We release the FRUIT-WIKI dataset, a collection of over 170K distantly supervised examples, along with our data-generation pipeline and a gold evaluation set of 914 instances whose edits are certified to be supported by the evidence. We provide benchmark results for popular generation systems as well as for an edit-based T5 approach that we introduce, which establishes the state of the art. Our analysis shows that developing models that can faithfully update articles requires new capabilities for neural generation models, and opens the door to many new applications.